13 research outputs found

    Real Time Turbulent Video Perfecting by Image Stabilization and Super-Resolution

    Image and video quality in Long Range Observation Systems (LOROS) suffers from atmospheric turbulence, which causes small neighbourhoods in image frames to move chaotically in different directions and substantially hampers visual analysis of such sequences. The paper presents a real-time algorithm for perfecting turbulence-degraded videos by means of stabilization and resolution enhancement, the latter achieved by exploiting the turbulent motion itself. The algorithm involves generating a reference frame; estimating, for each incoming video frame, a local image-displacement map with respect to the reference frame; segmenting the displacement map into two classes, stationary and moving objects; and enhancing the resolution of stationary objects while preserving real motion. Experiments with synthetic and real-life sequences have shown that the enhanced videos, generated in real time, exhibit substantially better resolution and complete stabilization of stationary objects while retaining real motion. Comment: Submitted to The Seventh IASTED International Conference on Visualization, Imaging, and Image Processing (VIIP 2007), August 2007, Palma de Mallorca, Spain.
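
    As a rough illustration of the pipeline described in the abstract, the Python sketch below builds a reference frame by temporal median, estimates a block-wise displacement map by exhaustive search, classifies each block as turbulent jitter or real motion by a displacement threshold, and temporally averages the stabilized stationary blocks. This is a minimal sketch, not the paper's implementation; the block size, search radius, and motion threshold are illustrative assumptions.

    import numpy as np

    def reference_frame(frames):
        # Temporal median of an initial batch of frames as a (nearly) turbulence-free estimate.
        return np.median(np.stack([f.astype(float) for f in frames]), axis=0)

    def block_displacement(frame, ref, y, x, bs=16, radius=4):
        # Best integer shift of one block, found by exhaustive SSD search around (y, x).
        block = frame[y:y + bs, x:x + bs]
        best, best_dxy = np.inf, (0, 0)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                yy, xx = y + dy, x + dx
                if yy < 0 or xx < 0 or yy + bs > ref.shape[0] or xx + bs > ref.shape[1]:
                    continue
                ssd = np.sum((block - ref[yy:yy + bs, xx:xx + bs]) ** 2)
                if ssd < best:
                    best, best_dxy = ssd, (dy, dx)
        return best_dxy

    def process_frame(frame, ref, accum, count, bs=16, radius=4, motion_thresh=2.0):
        # Classify each block as turbulence-induced jitter (stationary scene) or real
        # motion, and average the stabilized stationary blocks into running buffers.
        frame = frame.astype(float)
        out = frame.copy()
        for y in range(0, frame.shape[0] - bs + 1, bs):
            for x in range(0, frame.shape[1] - bs + 1, bs):
                dy, dx = block_displacement(frame, ref, y, x, bs, radius)
                if np.hypot(dy, dx) <= motion_thresh:  # small displacement: turbulence only
                    accum[y:y + bs, x:x + bs] += frame[y + dy:y + dy + bs, x + dx:x + dx + bs]
                    count[y:y + bs, x:x + bs] += 1
                    out[y:y + bs, x:x + bs] = accum[y:y + bs, x:x + bs] / count[y:y + bs, x:x + bs]
                # larger displacements are treated as real motion and passed through unchanged
        return out

    A caller would keep accum and count as zero-initialized float buffers of the reference frame's size and feed frames one at a time, which is what keeps the scheme streaming-friendly; the temporal averaging of jitter-aligned stationary blocks is what stands in here for the paper's resolution enhancement.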

    Model-based object recognition from a complex binary imagery using genetic algorithm

    This paper describes a technique for model-based object recognition in a noisy and cluttered environment, extending the authors' earlier work. In order to accurately model small, irregularly shaped objects, the model and the image are represented by their binary edge maps rather than approximated by straight line segments. The problem is then formulated as finding the best-describing match between a hypothesized object and the image. A special form of template matching is used to cope with the noisy environment, where the templates are generated online by a Genetic Algorithm. Two complex test images are considered in the experiments, and the results, compared with standard techniques, indicate scope for further research in this direction.
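
    A minimal sketch of the idea, assuming the Genetic Algorithm searches over a translation-plus-rotation pose of the model edge map and scores each candidate template by its overlap with the image edge map. The pose encoding, population size, crossover, and mutation settings are illustrative assumptions, not the authors' configuration.

    import numpy as np
    from scipy.ndimage import rotate, shift

    rng = np.random.default_rng(0)

    def fitness(pose, model_edges, image_edges):
        # Overlap between the transformed model edges and the image edges.
        # model_edges and image_edges are assumed to be binary arrays of equal size.
        tx, ty, theta = pose
        t = rotate(model_edges.astype(float), theta, reshape=False, order=0)
        t = shift(t, (ty, tx), order=0)
        return float(np.sum((t > 0.5) & (image_edges > 0)))

    def ga_match(model_edges, image_edges, pop=40, gens=60, mut=0.3):
        h, w = image_edges.shape
        # Random initial population of poses: x/y translation and rotation angle.
        P = np.column_stack([rng.uniform(-w / 2, w / 2, pop),
                             rng.uniform(-h / 2, h / 2, pop),
                             rng.uniform(0, 360, pop)])
        for _ in range(gens):
            scores = np.array([fitness(p, model_edges, image_edges) for p in P])
            elite = P[np.argsort(scores)[::-1][:pop // 2]]          # keep the better half
            # Crossover: average the pose parameters of two random elite parents.
            parents = elite[rng.integers(0, len(elite), (pop - len(elite), 2))]
            children = parents.mean(axis=1)
            # Mutation: perturb a random fraction of the child parameters.
            mask = rng.random(children.shape) < mut
            children += mask * rng.normal(0, [3.0, 3.0, 5.0], children.shape)
            P = np.vstack([elite, children])
        scores = np.array([fitness(p, model_edges, image_edges) for p in P])
        return P[np.argmax(scores)]                                 # best-fitting pose

    The "templates generated online" in this reading are the transformed model edge maps produced inside the fitness evaluation; a practical version would likely add scale to the pose vector.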

    Nonlocal similarity image filtering

    We exploit the recurrence of structures at different locations, orientations and scales in an image to perform denoising. While previous methods based on “nonlocal filtering” identify corresponding patches only up to translations, we consider more general similarity transformations. Due to the additional computational burden, we break the problem down into two steps: first, we extract similarity-invariant descriptors at each pixel location; second, we search for similar patches by matching descriptors. The descriptors used are inspired by the scale-invariant feature transform (SIFT), whereas the similarity search is solved via the minimization of a cost function adapted from local denoising methods. Our method compares favorably with existing denoising algorithms as tested on several datasets.
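
    The two-step structure might be sketched as follows, with a crude rotation-normalized orientation histogram standing in for the SIFT-inspired descriptor and a windowed, descriptor-weighted average standing in for the cost-function minimization. Patch size, window size, and the bandwidth h are assumptions; the paper's actual descriptor and cost function are richer.

    import numpy as np
    from scipy.ndimage import sobel

    def descriptors(img, patch=7, nbins=8):
        # Per-pixel gradient-orientation histogram, rolled so its dominant bin comes
        # first; the roll gives a rough invariance to in-plane rotation.
        gy, gx = sobel(img, axis=0), sobel(img, axis=1)
        mag, ang = np.hypot(gx, gy), np.arctan2(gy, gx)
        r = patch // 2
        H, W = img.shape
        D = np.zeros((H, W, nbins))
        for y in range(r, H - r):
            for x in range(r, W - r):
                a = ang[y - r:y + r + 1, x - r:x + r + 1].ravel()
                m = mag[y - r:y + r + 1, x - r:x + r + 1].ravel()
                hist, _ = np.histogram(a, bins=nbins, range=(-np.pi, np.pi), weights=m)
                D[y, x] = np.roll(hist, -np.argmax(hist))
        return D

    def nonlocal_filter(img, h=0.5, patch=7, window=9):
        # img: 2-D float array (assumption). Each pixel is replaced by a weighted
        # average over a search window, with weights driven by descriptor distance.
        D = descriptors(img, patch)
        r = window // 2
        H, W = img.shape
        out = img.astype(float).copy()
        for y in range(r, H - r):
            for x in range(r, W - r):
                d0 = D[y, x]
                nbr_d = D[y - r:y + r + 1, x - r:x + r + 1]
                w = np.exp(-np.sum((nbr_d - d0) ** 2, axis=-1) / h ** 2)
                out[y, x] = np.sum(w * img[y - r:y + r + 1, x - r:x + r + 1]) / np.sum(w)
        return out

    Matching descriptors rather than raw patches is what lets corresponding structures at different orientations reinforce each other, which is the point of going beyond translation-only nonlocal filtering.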

    A Fast Approximation of the Bilateral Filter using a Signal Processing Approach

    The bilateral filter is a nonlinear filter that smoothes a signal while preserving strong edges. It has demonstrated great effectiveness for a variety of problems in computer vision and computer graphics, and fast versions have been proposed. Unfortunately, little is known about the accuracy of such accelerations. In this paper, we propose a new signal-processing analysis of the bilateral filter which complements the recent studies that analyzed it as a PDE or as a robust statistical estimator. The key to our analysis is to express the filter in a higher-dimensional space where the signal intensity is added to the original domain dimensions. Importantly, this signal-processing perspective allows us to develop a novel bilateral filtering acceleration using downsampling in space and intensity. This affords a principled expression of accuracy in terms of bandwidth and sampling. The bilateral filter can be expressed as linear convolutions in this augmented space followed by two simple nonlinearities. This allows us to derive criteria for downsampling the key operations and achieving important acceleration of the bilateral filter. We show that, for the same running time, our method is more accurate than previous acceleration techniques. Typically, we are able to process a 2-megapixel image using our acceleration technique in less than a second, and have the result be visually similar to the exact computation that takes several tens of minutes. The acceleration is most effective with large spatial kernels. Furthermore, this approach extends naturally to color images and cross bilateral filtering.
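
    A compact sketch of the higher-dimensional view, under stated assumptions: the image is splatted into a coarse (x, y, intensity) grid as value/weight pairs, the grid is blurred with a Gaussian (the linear convolution), and the result is sliced back at every pixel and normalized (the two nonlinearities). The grid resolutions and constants below are illustrative, not the paper's reference implementation, and the input is assumed to be a single-channel image scaled to [0, 1].

    import numpy as np
    from scipy.ndimage import gaussian_filter, map_coordinates

    def fast_bilateral(img, sigma_s=16.0, sigma_r=0.1):
        # img: 2-D float array in [0, 1] (assumption).
        H, W = img.shape
        # Downsampled grid: roughly one cell per sigma in each dimension, plus padding.
        gh, gw = int(H / sigma_s) + 3, int(W / sigma_s) + 3
        gd = int(1.0 / sigma_r) + 3
        data = np.zeros((gh, gw, gd))
        weight = np.zeros((gh, gw, gd))
        ys, xs = np.mgrid[0:H, 0:W]
        gy = (ys / sigma_s + 1).round().astype(int)
        gx = (xs / sigma_s + 1).round().astype(int)
        gz = (img / sigma_r + 1).round().astype(int)
        np.add.at(data, (gy, gx, gz), img)       # splat intensities into the grid
        np.add.at(weight, (gy, gx, gz), 1.0)     # splat weights (homogeneous coordinate)
        # In this augmented space the bilateral filter is a plain Gaussian convolution.
        data = gaussian_filter(data, 1.0)
        weight = gaussian_filter(weight, 1.0)
        # Slice: trilinear interpolation back at each pixel, then normalize by the weight.
        coords = np.stack([ys / sigma_s + 1, xs / sigma_s + 1, img / sigma_r + 1])
        num = map_coordinates(data, coords, order=1)
        den = map_coordinates(weight, coords, order=1)
        return num / np.maximum(den, 1e-8)

    Because the blur happens on the coarse grid, the cost is governed by the grid size rather than the kernel size, which is why the speed-up grows with large spatial kernels.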